40 research outputs found

    Q Learning Behavior on Autonomous Navigation of Physical Robot

    Behavior-based architecture gives a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed. Subsumption architecture is a behavior coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q learning affects the robot's performance in the learning phase. As a result, the Q learning algorithm is successfully implemented in a physical robot despite its imperfect environment.
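    The off-policy update the abstract refers to can be sketched as a minimal tabular Q-learning step. The states, actions, reward, and parameter values below are illustrative assumptions, not the paper's actual robot setup.

```python
import random

ALPHA = 0.5    # learning rate -- the parameter whose effect the paper studies
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

# Hypothetical discretized sensor states and motor actions.
STATES = ["clear", "obstacle_left", "obstacle_right", "obstacle_front"]
ACTIONS = ["forward", "turn_left", "turn_right"]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Off-policy Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical learning step: the robot drove forward into an obstacle.
update("obstacle_front", "forward", -1.0, "obstacle_front")
```

    Because the max over next-state actions is taken regardless of the action actually executed, the update is off-policy, which is the property the abstract highlights.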

    Behavior Coordination Methods on Autonomous Navigation of Physical Robot

    Behavior-based architecture gives a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed. Subsumption architecture and motor schema are examples of behavior coordination methods. In order to study these methods' characteristics, computer simulation is not enough; experiments on a physical robot are needed. It can be concluded from the experimental results that the first method gives a quick and robust but non-smooth response, while the latter gives a smoother but slower response and tends to reach the target faster than the first one. Some limitations of physical robot experiments are also presented here.
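    The motor-schema method compared here coordinates behaviors by summing weighted vectors rather than by suppression, which is why its response is smoother. A minimal sketch, with hypothetical gains and behaviors (not the paper's implementation):

```python
import math

def move_to_goal(robot_xy, goal_xy, gain=1.0):
    """Attractive unit vector toward the goal, scaled by a gain."""
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy) or 1.0
    return (gain * dx / dist, gain * dy / dist)

def avoid_obstacle(robot_xy, obstacle_xy, gain=1.0, influence=2.0):
    """Repulsive vector that fades to zero outside an influence radius."""
    dx, dy = robot_xy[0] - obstacle_xy[0], robot_xy[1] - obstacle_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist >= influence:
        return (0.0, 0.0)
    scale = gain * (influence - dist) / influence
    return (scale * dx / dist, scale * dy / dist)

def coordinate(vectors):
    """Motor-schema coordination: vector summation of all active outputs."""
    return (sum(v[0] for v in vectors), sum(v[1] for v in vectors))

# Hypothetical situation: goal ahead, obstacle slightly to the front-left.
cmd = coordinate([
    move_to_goal((0.0, 0.0), (10.0, 0.0)),
    avoid_obstacle((0.0, 0.0), (1.0, 1.0)),
])
```

    Since every active behavior always contributes, commands blend gradually instead of switching abruptly, matching the smoother-but-slower response reported above.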

    Penerapan Behavior Based Architecture dan Q Learning pada Sistem Navigasi Otonom Hexapod Robot

    Hexapod robots are widely used because of the stability and flexibility of their gait patterns. In this research, a hexapod is designed with a behavior-based architecture that is fast and reactive to inputs from the outside world. In addition, Q learning is applied as the robot's learning algorithm, so the robot can anticipate unexpected events in its environment. Simulation results show that the behavior-based architecture and the Q learning algorithm were successfully used for an autonomous robot navigation system whose goal is to find a target in the form of a light source. Keywords: hexapod robot, behavior-based architecture, Q learning, autonomous navigation

    Analisa Performansi Dan Robustness Beberapa Metode Tuning Kontroler PID Pada Motor DC

    The PID controller is a well-known controller that is still used in industry today. The important step in the design is tuning the controller's parameters. The tuning methods described here are Ziegler-Nichols, Cohen-Coon, and Direct Synthesis. By implementing a PID controller on a DC motor, we analyze the performance and robustness of the system. In general, the Ziegler-Nichols and Cohen-Coon methods give better performance (rise time of about 0.1 s and settling time under 1 s) and better robustness (phase margin of about 40°). However, when a nonlinearity approximation is applied because of the DC motor's limitations, the Direct Synthesis method gives much better performance.
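    The classic Ziegler-Nichols closed-loop rules mentioned above compute PID gains from the ultimate gain Ku and oscillation period Tu measured at the stability boundary. A sketch of those textbook rules; the Ku and Tu values below are hypothetical, not the paper's DC-motor data.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols (ultimate-gain) PID tuning rules.
    Ku: ultimate proportional gain at sustained oscillation.
    Tu: period of that oscillation in seconds."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0          # integral time constant
    Td = Tu / 8.0          # derivative time constant
    Ki = Kp / Ti           # parallel-form integral gain
    Kd = Kp * Td           # parallel-form derivative gain
    return Kp, Ki, Kd

# Hypothetical plant measurements, for illustration only.
Kp, Ki, Kd = ziegler_nichols_pid(Ku=10.0, Tu=0.5)
```

    The resulting gains are a starting point; as the abstract notes, their closed-loop performance and phase margin still have to be verified against the actual plant.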

    Tool Use Learning for a Real Robot

    A robot may need to use a tool to solve a complex problem. Currently, tool use must be pre-programmed by a human. However, this is a difficult task, and it would help if the robot were able to learn how to use a tool by itself. Most work on tool use learning by a robot uses a feature-based representation. Despite many successful results, this representation is limited in the types of tools and tasks it can handle. Furthermore, the complex relationship between a tool and other objects in the world cannot be captured easily. Relational learning methods have been proposed to overcome these weaknesses [1, 2]. However, they have only been evaluated in a sensor-less simulation to avoid the complexities and uncertainties of the real world. We present a real-world implementation of a relational tool use learning system for a robot. In our experiment, a robot requires around ten examples to learn to use a hook-like tool to pull a cube from a narrow tube.

    A cognitive robot equipped with autonomous tool innovation expertise

    Like a human, a robot may benefit from being able to use a tool to solve a complex task. When an appropriate tool is not available, a very useful ability for a robot is to create a novel one based on its experience. With the advent of inexpensive 3D printing, it is now possible to give robots such an ability, at least for simple tools. We propose a method for learning how to use an object as a tool and, if needed, to design and construct a new one. The robot begins by learning an action model of tool use for a PDDL planner by observing a trainer. It then refines the model by trial-and-error learning. Tool creation consists of generalising an existing tool model and generating a novel tool by instantiating the general model. Further learning by experimentation is then performed. Providing a tool ontology reduces the search space of potentially useful tools. We then use a constraint solver to obtain numerical parameters from the abstract description and use them for a ready-to-print design. We evaluated our system on a simulated and a real Baxter robot in two cases: a hook and a wedge. We found that our system performs tool creation successfully.
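    The step of instantiating a general tool model with concrete numbers can be pictured as a small constraint search. The constraints, dimensions, and function name below are illustrative assumptions, not the authors' solver or tool model.

```python
def design_hook(gap_width_cm, tube_depth_cm):
    """Hypothetical sketch: search for hook dimensions satisfying two
    simple geometric constraints derived from the task description."""
    for handle in range(5, 40):          # candidate handle lengths (cm)
        for head in range(1, 10):        # candidate head lengths (cm)
            fits_tube = head < gap_width_cm    # head must enter the tube opening
            reaches = handle > tube_depth_cm   # handle must reach the cube
            if fits_tube and reaches:
                return {"handle_cm": handle, "head_cm": head}
    return None  # no tool in the search space satisfies the constraints

# Hypothetical task: 4 cm tube opening, cube 15 cm deep.
tool = design_hook(gap_width_cm=4, tube_depth_cm=15)
```

    A real constraint solver plays the same role as this brute-force loop but scales to many coupled parameters; the output here would then feed a parametric, ready-to-print CAD model.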

    Behaviors Coordination and Learning on Autonomous Navigation of Physical Robot

    Behavior coordination is one of the key points in behavior-based robotics. Subsumption architecture and motor schema are examples of coordination methods. In order to study their characteristics, experiments on a physical robot need to be done. It can be concluded from the experimental results that the first method gives a quick and robust but non-smooth response, while the latter gives a slower but smoother response and tends to reach the target faster. A learning behavior improves the robot's performance in handling uncertainty. Q learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. The learning rate of Q learning affects the robot's performance in the learning phase. The Q learning algorithm is implemented in the subsumption architecture of a physical robot. As a result, the robot succeeds in the autonomous navigation task, although it has some limitations related to sensor placement and characteristics.
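    Subsumption coordination can be sketched as fixed-priority arbitration: behaviors are checked from highest to lowest priority, and the first active one suppresses (subsumes) the rest. The behavior names and sensor fields below are hypothetical, not the paper's robot.

```python
def avoid_obstacle(sensors):
    """Highest priority: react to a close frontal obstacle."""
    if sensors.get("front_distance", float("inf")) < 0.2:
        return "turn_left"
    return None  # inactive -> lower layers may act

def seek_target(sensors):
    """Middle priority: steer toward a detected target."""
    if sensors.get("target_bearing") is not None:
        return "steer_to_target"
    return None

def wander(sensors):
    """Lowest priority: default exploration, always active."""
    return "forward"

BEHAVIORS = [avoid_obstacle, seek_target, wander]  # highest priority first

def arbitrate(sensors):
    """Return the action of the highest-priority active behavior."""
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action
```

    The abrupt switch between layers is what makes the subsumption response quick and robust but non-smooth, in contrast to the motor-schema vector blending compared above.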

    Teleautonomous Control on Rescue Robot Prototype

    Robot applications in disaster areas can help responder teams save victims. To finish its task, the robot must have a flexible movement mechanism so it can pass through cluttered areas. A passive linkage can be used in the robot chassis to give the robot flexibility. In physical experiments, the robot succeeded in moving over gravel and a 5 cm obstacle. A rescue robot also has specialized control needs: it must be controllable remotely, and it must also be able to move autonomously. The teleautonomous control method is a combination of those two methods. It can be concluded from the experiments that in teleoperation mode, the operator must get used to seeing the environment through the robot's camera, while in autonomous mode, the robot succeeded in avoiding obstacles and searching for the target based on sensor readings and the controller program. In teleautonomous mode, the robot can change control mode using Bluetooth communication for data transfer, making robot control more flexible.
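    The mode-switching logic can be sketched as a small state machine driven by commands arriving over a link such as Bluetooth. The command strings, class, and method names are hypothetical, not the prototype's actual protocol.

```python
class RescueRobotController:
    """Sketch of teleautonomous control: remote commands either switch the
    control mode or, in teleoperation mode, drive the motors directly."""

    def __init__(self):
        self.mode = "teleoperation"

    def on_command(self, command):
        """Handle one command received over the (e.g. Bluetooth) link."""
        if command == "MODE_AUTO":
            self.mode = "autonomous"
        elif command == "MODE_TELE":
            self.mode = "teleoperation"
        elif self.mode == "teleoperation":
            return self.drive(command)  # operator drives the robot directly
        # in autonomous mode, movement commands from the operator are ignored

    def drive(self, command):
        """Placeholder for sending a motion command to the motors."""
        return f"motors: {command}"

    def autonomous_step(self, front_distance):
        """One control-loop step of the onboard obstacle-avoidance program."""
        if self.mode != "autonomous":
            return "idle"
        return "turn" if front_distance < 0.3 else "forward"
```

    Keeping the mode switch in the command handler means the operator can hand control to the onboard program (and take it back) at any time without restarting the robot.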

    Aplikasi Scada System Pada Miniatur Water Level Control

    This paper discusses the implementation of a miniature water level control system built as a simulation of an industrial process. The project is equipped with SCADA to give a realistic visualization of the industrial process. To make controlling and monitoring easier, the project is built at a small size (a miniature model) that is easy to carry (portable). The miniature water level control was made as a plant controlled by a PLC. Two PLCs are used: an OMRON CPM1 and a MODICON TSX Micro 3721. The computer, in this case the SCADA software, visualizes the process happening on the plant. From the experiments, the plant gives a good response and keeps the water level at the desired height. The system is stable when the water height matches the set point. A PID control bias method (pump_voltage = bias + error) is used to run the system.
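    The stated control law, pump_voltage = bias + error, is a proportional action around a nominal operating point. A minimal sketch; the bias value and voltage limits below are assumptions, not measurements from the plant.

```python
V_MIN, V_MAX = 0.0, 10.0   # pump voltage range (assumed actuator limits)
BIAS = 5.0                 # nominal pump voltage at steady state (assumed)

def pump_voltage(setpoint_cm, level_cm):
    """Bias-plus-error control law from the abstract, clamped to the
    actuator range.  Unit proportional gain is implied by the formula."""
    error = setpoint_cm - level_cm
    v = BIAS + error
    return min(V_MAX, max(V_MIN, v))
```

    At the set point the error is zero and the pump runs at the bias voltage, which is why the system settles exactly when the water height equals the set point.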

    Communication Between TSX Micro 37-21 PLCs on Beverage Production Plant

    The Programmable Logic Controller (PLC) is the most popular controller in industry. Because industrial processes are becoming more and more complex, more than one PLC is often needed, and communication between them must exist. This paper describes a study of communication between two Schneider TSX Micro 37-21 PLCs in order to control a miniature beverage production plant. The usual (and expensive) way to connect them is the TSX P ACC 01 converter module. A less expensive way, explained here, uses the original TSX PCX 1031 converter cable connected to the PLC ports (the TER and AUX ports). The PL7 Pro V4.0 software is used to program the PLCs. The communication could not be established due to some improper addressing. The exploration continued down to the serial communication level, where it was found that the system uses RS 485.